Search Results: "bart"

26 June 2016

Clint Adams: A local script for local people

This isn't actually answering the question, but it's close. It's also horrible, so whoever adopts Enrico's script should also completely rewrite this or burn it along with the stack of pizza boxes and the grand piano. Input:
#!/bin/zsh
set -e
PATHS=$(tempfile)
NEWKEYS=$(tempfile)
NEWKEYRING=$(tempfile)
FARTHEST_TEN=$(tempfile)
trap "rm -f ${PATHS} ${NEWKEYS} ${NEWKEYRING} ${FARTHEST_TEN}" EXIT
keyring=${1:-ksp-dc16.gpg}
myfpr=${2:-2100A32C46F895AF3A08783AF6D3495BB0AE9A02}
#keyserver=${3:-http://pool.sks-keyservers.net:11371/}
# this doesn't handle hokey fetch failures
#(for fpr in $(hkt list --keyring ${keyring} --output-format JSON | jq '.[].publickey.fpr')
#do
#  hokey fetch --keyserver "${keyserver}" --validation-method MatchPrimaryKeyFingerprint "${(Q)fpr}"
#done) >${NEWKEYS}
#
#gpg2 --no-default-keyring --keyring ${NEWKEYRING} --import ${NEWKEYS}
cp "${keyring}" "${NEWKEYRING}"
gpg2 --no-default-keyring --keyring ${NEWKEYRING} --refresh
hkt findpaths --keyring ${NEWKEYRING} '' '' '' > ${PATHS}
id=$(awk -F, "/${myfpr})\$/ {sub(/\(/,BLANKY,\$1);print \$1;}" ${PATHS})
grep -e ",\[${id}," -e ",${id}\]" ${PATHS} | sort -n | tail -n 10 > ${FARTHEST_TEN}
targetids=(${(f)"${$((sed 's/^.*\[//;s/,.*$//;' ${FARTHEST_TEN}; sed 's/\])$//;s/.*,//;' ${FARTHEST_TEN}) | sort -n -u | grep -v "^${id}$")}"})
targetfprs=($(for i in ${targetids}; do awk -F, "/\(${i},[^[]/ {sub(/\)/,BLANKY,\$2); print \$2}" ${PATHS}; done))
gpg2 --no-default-keyring --keyring ${NEWKEYRING} --list-keys ${targetfprs}
Output:
pub   rsa4096/0x664F1238AA8F138A 2015-07-14 [SC]
      Key fingerprint = 3575 0B8F B6EF 95FF 16B8  EBC0 664F 1238 AA8F 138A
uid                   [ unknown] Daniel Lange <dl.ml1@usrlocal.de>
sub   rsa4096/0x03BEE1C11DB1954B 2015-07-14 [E]
pub   rsa4096/0xDF23DA3396978EB3 2014-09-05 [SC]
      Key fingerprint = BBBC 58B4 5994 CF9C CC56  BCDA DF23 DA33 9697 8EB3
uid                   [  undef ] Michael Meskes <michael@fam-meskes.de>
uid                   [  undef ] Michael Meskes <meskes@postgresql.org>
uid                   [  undef ] Michael Meskes <michael.meskes@credativ.com>
uid                   [  undef ] Michael Meskes <meskes@debian.org>
sub   rsa4096/0x85C3AFFECF0BF9B5 2014-09-05 [E]
sub   rsa4096/0x35D857C0BBCB3B25 2014-11-04 [S]
pub   rsa4096/0x1E953E27D4311E58 2009-07-12 [SC]
      Key fingerprint = C2FE 4BD2 71C1 39B8 6C53  3E46 1E95 3E27 D431 1E58
uid                   [  undef ] Chris Lamb <chris@chris-lamb.co.uk>
uid                   [  undef ] Chris Lamb <lamby@gnu.org>
uid                   [  undef ] Chris Lamb <lamby@debian.org>
sub   rsa4096/0x72B3DBA98575B3F2 2009-07-12 [E]
pub   rsa4096/0xDF6D76C44D696F6B 2014-08-15 [SC] [expires: 2017-06-03]
      Key fingerprint = 1A6F 3E63 9A44 67E8 C347  6525 DF6D 76C4 4D69 6F6B
uid                   [ unknown] Sven Bartscher <sven.bartscher@weltraumschlangen.de>
uid                   [ unknown] Sven Bartscher <svenbartscher@yahoo.de>
uid                   [ unknown] Sven Bartscher <kritzefitz@debian.org>
sub   rsa4096/0x9E83B071ED764C3A 2014-08-15 [E]
sub   rsa4096/0xAEB25323217028C2 2016-06-14 [S]
pub   rsa4096/0x83E33BD7D4DD4CA1 2015-11-12 [SC] [expires: 2017-11-11]
      Key fingerprint = 0B5A 33B8 A26D 6010 9C50  9C6C 83E3 3BD7 D4DD 4CA1
uid                   [ unknown] Jerome Charaoui <jerome@riseup.net>
sub   rsa4096/0x6614611FBD6366E7 2015-11-12 [E]
sub   rsa4096/0xDB17405204ECB364 2015-11-12 [A] [expires: 2017-11-11]
pub   rsa4096/0xF823A2729883C97C 2014-08-26 [SC]
      Key fingerprint = 8ED6 C3F8 BAC9 DB7F C130  A870 F823 A272 9883 C97C
uid                   [ unknown] Lucas Kanashiro <kanashiro@debian.org>
uid                   [ unknown] Lucas Kanashiro <kanashiro.duarte@gmail.com>
sub   rsa4096/0xEE6E5D1A9C2F5EA6 2014-08-26 [E]
pub   rsa4096/0x2EC0FFB3B7301B1F 2014-08-29 [SC] [expires: 2017-04-06]
      Key fingerprint = 76A2 8E42 C981 1D91 E88F  BA5E 2EC0 FFB3 B730 1B1F
uid                   [ unknown] Niko Tyni <ntyni@debian.org>
uid                   [ unknown] Niko Tyni <ntyni@cc.helsinki.fi>
uid                   [ unknown] Niko Tyni <ntyni@iki.fi>
sub   rsa4096/0x129086C411868FD0 2014-08-29 [E] [expires: 2017-04-06]
pub   rsa4096/0xAA761F51CC10C92A 2016-06-20 [SC] [expires: 2018-06-20]
      Key fingerprint = C9DE 2EA8 93EE 4C86 BE73  973A AA76 1F51 CC10 C92A
uid                   [ unknown] Roger Shimizu <rogershimizu@gmail.com>
sub   rsa4096/0x2C2EE1D5DBE7B292 2016-06-20 [E] [expires: 2018-06-20]
sub   rsa4096/0x05C7FD79DD03C4BB 2016-06-20 [S] [expires: 2016-09-18]
Note that this completely neglects potential victims who are unconnected within the KSP set.
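For anyone who wants to try it anyway, an invocation sketch (the filename is invented; the two optional arguments are the keyring and your own fingerprint, defaulting to the values shown above, and hkt, gpg2 and tempfile need to be available):
zsh farthest-ten.zsh ksp-dc16.gpg 2100A32C46F895AF3A08783AF6D3495BB0AE9A02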

17 May 2016

Sean Whitton: seoulviasfo

I spent last night in San Francisco on my way from Tucson to Seoul. This morning as I headed to the airport, I caught the end of a shouted conversation between a down-and-out and a couple of middle school-aged girls, who ran away back to the Asian Art museum as the conversation ended. A security guard told the man that he needed him to go away. The wealth divide so visible here just isn't something you really see around Tucson.

I'm working on a new module for Propellor that's complicated enough that I need to think carefully about the Haskell in order to produce a flexible and maintainable module. I've only been doing an hour or so of work on it per day, but the past few days I wake up each day with an idea for restructuring yesterday's code. These ideas aren't anything new to me: I think I'm just dredging up the understanding of Haskell I developed last year when I was studying it more actively. Hopefully this summer I can learn some new things about Haskell.

Riding on the Bay Area Rapid Transit (BART) feels like stepping back in time to the years of Microsoft's ascendancy, before we had a tech world dominated by Google and Facebook: the platform announcements are in a computerised voice that sounds like it was developed in the nineties. They'll eventually replace the old trains (apparently some new ones are coming in 2017), so I feel privileged to have been able to ride the older ones. I feel the same about the Tube in London. I really appreciate old but supremely reliable and effective public transport. It reminds me of the Debian toolchain: a bit creaky, but maintained over a sufficiently long period that it serves everyone a lot better than newer offerings, which tend to be produced with ulterior corporate motives.

16 May 2016

Bits from Debian: New Debian Developers and Maintainers (March and April 2016)

The following contributors got their Debian Developer accounts in the last two months: Congratulations!

10 May 2016

Dmitry Shachnev: ReText 6.0 and PyMarkups 2.0 released

Today I have released the new major version of the ReText editor. This release would not have been possible without Maurice van der Pot, who was the author of the greatest features because of which the version number was bumped: Other news worth mentioning are: As usual, some bugs have been fixed (most of the fixes have also been backported to the 5.3.1 release), and some translations have been updated from Transifex. Please report any bugs you find to our issue tracker.

25 April 2016

Norbert Preining: Gödel and Daemons - an excursion into literature

Explaining Gödel's theorems to students is a pain. Period. How can those poor creatures crank their mind around a Completeness and an Incompleteness Proof? I understand that. But then, there are brave souls using Gödel's theorems to explain the world of demons to writers, in particular to answer the question:
You can control a Demon by knowing its True Name, but why?
Very impressive. Found at worldbuilding.stackexchange.com, pointed out to me by a good friend. I dare to quote author Cort Ammon (nothing more is known) in full, to preserve this masterpiece. Thanks!!!!
Use of their name forces them to be aware of the one truth they can never know. Tl/Dr: If demons seek permanent power but trust no one, they put themselves in a strange position where mathematical truisms paint them into a corner which leaves their soul small and frail holding all the strings. Use of their name suggests you might know how to tug at those strings and unravel them wholesale, from the inside out! Being a demon is tough work. If you think facing down a 4000lb Glabrezu without their name is difficult, try keeping that much muscle in shape in the gym! Never mind how many manicurists you go through keeping the claws in shape! I don't know how creative such demons truly are, but the easy route towards the perfect French tip that can withstand the rigors of going to the gym and benching ten thousand pounds is magic. Such a demon might learn a manicure spell from the nearby resident succubi. However, such spells are often temporary. No demon worth their salt is going to admit in front of a hero that they need a moment to refresh their mani before they can fight. The hero would just laugh at them. No, if a demon is going to do something, they're going to do it right, and permanently. Not just nice french tips with a clear lacquer over the top, but razor sharp claws that resharpen themselves if they are blunted and can extend or retract at will! In fact, come to think of it, why even go to the gym to maintain one's physique? Why not just cast a magic spell which permanently makes you into the glorious Hanz (or Franz) that the trainer keeps telling you is inside you, just waiting to break free. Just get the spell right once, and think of the savings you could have on gym memberships. Demons that wish to become more powerful, permanently, must be careful. If fairy tales have anything to teach us, it's that one of the most dangerous things you can do is wish for something forever, and have it granted. Forever is a very long time, and every spell has its price. The demon is going to have to make sure the price is not greater than the perks. It would be a real waste to have a manicure spell create the perfect claws, only to find that they come with a peculiar penchant to curve towards one's own heart in an attempt to free themselves from the demon that cast them. So we need proofs. We need proofs that each spell is a good idea, before we cast it. Then, once we cast it, we need proof that the spell actually worked as intended. Otherwise, who knows if the next spell will layer on top perfectly or not. Mathematics to the rescue! The world of First Order Logic (FOL, or hereafter simply "logic") is designed to offer these guarantees. With a few strokes of a pen, pencil, or even brush, it can write down a set of symbols which prove, without a shadow of a doubt, that not only will the spell work as intended, but that the side effects are manageable. How? So long as the demon can prove that they can cast a negation spell to undo their previous spell, the permanency can be reverted by the demon. With a few more fancy symbols, the demon can also prove that nobody else outside of the demon can undo their permanency. It's a simple thing for mathematics really. Mathematics has an amazing spell called reductio ad infinitum which does unbelievable things. However, there is a catch. There is always a catch with magic, even when that magic is being done through mathematics. In 1931, Kurt Gödel published his Incompleteness Theorems.
These are 3 fascinating works of mathematical art which invoke the true names of First Order Logic and Set Theory. Gödel was able to prove that any system which is powerful enough to prove out all of algebra (1 + 1 = 2, 2 + 1 = 3, 3 * 5 = 15, etc.), could not prove its own validity. The self-referential nature of proving itself crossed a line that First Order Logic simply could not return from. He proved that any system which tries must pick up one of these five traits: If the demon wants itself to be able to cancel the spell, his proof is going to have to include his own abilities, creating just the kind of self-referential effects needed to invoke Gödel's incompleteness theorems. After a few thousand years, the demon may realize that this is folly. A fascinating solution the demon might choose is to explore the incomplete solution to Gödel's challenge. What if the demon permits the spell to change itself slightly, but in an unpredictable way. If the demon was a hard drive, perhaps he lets a single byte get changed by the spell in a way he cannot expect. This is actually enough to sidestep Gödel's work, by introducing incompleteness. However, now we have to deal with pesky laws of physics and magics. We can't just create something out of nothing, so if we're going to let the spell change a single byte of us, there must be a single byte of information, its dual, that is unleashed into the world. Trying to break such conservation laws opens up a whole can of worms. Better to let that little bit go free into the world. Well, almost. If you repeat this process a whole bunch of times, layering spells like a Matryoshka doll, you're eventually left with a soul that is nothing but the leftover bits of your spells that you simply don't know enough about to use. If someone were collecting those bits and pieces, they might have the undoing of your entire self. You can't prove it, of course, but it's possible that those pieces that you sent out into the world have the keys to undo your many layers of armor, and then you know they are the bits that can nullify your soul if they get there. So what do you do? You hide them. You cast your spells only on the darkest of nights, deep in a cave where no one can see you. If you need assistants, you make sure to ritualistically slaughter them all, lest one of them know your secret and whisper it to a bundle of reeds, "The king has horns", if you are familiar with the old fairy tale. Make it as hard as possible for the secret to escape, and hope that it withers away to nothingness before someone discovers it, leaving you invincible. Now we come back to the name. The demon is going to have a name it uses to describe its whole self, including all of the layers of spellcraft it has acquired. This will be a great name like "Abraxis, the Unbegotten Father" or "Satan, lord of the underworld". However, they also need to keep track of their smaller self, their soul. Failure to keep track of this might leave them open to an attack if they had missed a detail when casting their spells, and someone uncovered something to destroy them. This would be their true name, potentially something less pompous, like Gaylord Focker or Slartybartfarst. They would never use this name in company. Why draw attention to the only part of them that has the potential to be weak? So when the hero calls out for Slartybartfarst, the demon truly must pay attention. If they know the name the demon has given over the remains of their tattered soul, might they know how to undo the demon entirely?
Fear would grip their inner self, like a child, having to once again consider that they might be mortal. Surely they would wish to destroy the hero that spoke the name, but any attempt runs the risk of falling into a trap and exposing a weakness (surely their mind is racing, trying to enumerate all possible weaknesses they have). It is surely better for them to play along with you, once you use their true name, until they understand you well enough to confidently destroy you without destroying themselves. So you ask for answers which are plausible. This one needs no magic at all. None of the rules are invalid in our world today. Granted, finding a spell of perfect manicures might be difficult (believe me, some women have spent their whole life searching), but the rules are simply those of math. We can see this math in non-demonic parts of society as well. Consider encryption. An AES-256 key is so hard to brute force that it is currently believed it is impossible to break it without consuming 3/4 of the energy in the Milky Way Galaxy (no joke!). However, know the key, and decryption is easy. Worse, early implementations of AES took shortcuts. They actually left the signature of the path they took through the encryption in their accesses to memory. The caches on the CPU were like the reeds from the old fable. Merely observing how long it took to read data was sufficient to gather those reeds, make a flute, and play a song that unveils the encryption key (which is clearly either "The king has horns" or "1-2-3-4-5", depending on how secure you think your luggage combination is). Observing the true inner self of the AES encryption implementations was enough to completely dismantle them. Of course, not every implementation fell victim to this. You had to know the name of the implementation to determine which vulnerabilities it had, and how to strike at them. Or, more literally, consider the work of Alfred Whitehead, Principia Mathematica. Principia Mathematica was to be a proof that you could prove all of the truths in arithmetic using purely procedural means. In Principia Mathematica, there was no manipulation based on semantics; everything he did was based on syntax, manipulating the actual symbols on the paper. Gödel's Incompleteness Theorem caught Principia Mathematica by the tail, proving that its own rules were sufficient to demonstrate that it could never accomplish its goals. Principia Mathematica went down as the greatest Tower of Babel of modern mathematical history. Whitehead is no longer remembered for his mathematical work. He actually left the field of mathematics shortly afterwards, and became a philosopher and peace advocate, making a new name for himself there. (by Cort Ammon)

18 April 2016

Norbert Preining: TeX Live 2016 pretest and Debian packages

Preparation for the release of TeX Live 2016 have started some time ago with the freeze of updates in TeX Live 2015. Yesterday we announced the official start of the pretest period. That means that we invite people to test the new release and help fixing bugs. At the same time I have uploaded the first set of packages of TeX Live 2016 for Debian to the experimental suite. texlive-2016-debian-pretest Concerning the binaries we do expect a few further changes, but hopefully nothing drastic. The most invasive change on the tlmgr side is that cryptographic signatures are now verified to guarantee authenticity of the packages downloaded, but this is rather irrelevant for Debian users (though I will look into how that works in user mode). Other than that, many packages have been updated or added since the last Debian packages, here is the unified list: acro, animate, appendixnumberbeamer, arabluatex, asapsym, asciilist, babel-belarusian, bibarts, biblatex-bookinarticle, biblatex-bookinother, biblatex-caspervector, biblatex-chicago, biblatex-gost, biblatex-ieee, biblatex-morenames, biblatex-opcit-booktitle, bibtexperllibs, bxdvidriver, bxenclose, bxjscls, bxnewfont, bxpapersize, chemnum, cjk-ko, cochineal, csplain, cstex, datetime2-finnish, denisbdoc, dtx, dvipdfmx-def, ejpecp, emisa, fithesis, fnpct, font-change-xetex, forest, formation-latex-ul, gregoriotex, gzt, hausarbeit-jura, hyperxmp, imakeidx, jacow, l3, l3kernel, l3packages, latex2e, latex2e-help-texinfo-fr, latex-bib2-ex, libertinust1math, lollipop, lt3graph, lua-check-hyphen, lualibs, luamplib, luatexja, mathalfa, mathastext, mcf2graph, media9, metrix, nameauth, ndsu-thesis, newtx, normalcolor, noto, nucleardata, nwejm, ocgx2, pdfcomment, pdfpages, pkuthss, polyglossia, proposal, qcircuit, reledmac, rmathbr, savetrees, scanpages, stex, suftesi, svrsymbols, teubner, tex4ebook, tex-ini-files, tikzmark, tikzsymbols, titlesec, tudscr, typed-checklist, ulthese, visualtikz, xespotcolor, xetex-def, xetexko, ycbook, yinit-otf. Enjoy.

25 March 2016

Daniel Pocock: With Facebook, everybody can betray Jesus

It's Easter time again and many of those who are Christian will be familiar with the story of the Last Supper and the subsequent betrayal of Jesus by his friend Judas. If Jesus was around today and didn't immediately die from a heart attack after hearing about the Bishop of Bling (who spent $500,000 just renovating his closet and flew first class to visit the poor in India), how many more of his disciples would betray him and each other by tagging him in selfies on Facebook? Why do people put the short term indulgence of social media ahead of protecting privacy in the long term? Is this how you treat your friends?

7 March 2016

Ritesh Raj Sarraf: Laptop Mode Tools 1.69 Released

Today is an auspicious day. Those who know (or follow) the Hindu religion will be familiar with it; today is Maha Shivaratri. On this day, it is a great delight for me to be able to release Laptop Mode Tools, version 1.69. This release adds many bug fixes and some enhancements. There is a new module (disabled by default) for cpuhotplug. The release tarball also includes a basic PolicyKit file for convenience, which packagers can use for the Laptop Mode Tools Graphical Configuration Interface. Apart from the PolicyKit file, the graphical invocation script has been slightly fine-tuned to work under pkexec and sudo. Some defaults have been tuned based on user requests - this should improve situations where your external USB mouse/keyboard used to suspend after idle time periods. In January this year, I had the pleasure of meeting Bart Samwel in person at his office in Amsterdam. For those who don't know, Bart started off Laptop Mode Tools around 2004, and I took over maintenance around 2008. Meeting in person is a delight, especially with the people you have worked with over email for years; this is something I cherished last year at DebConf 15 too. IMPORTANT: Until now, the Laptop Mode Tools project was hosted on Bart's webserver. Now, as you read this, the homepage and mailing lists have changed. I'd urge all users to subscribe to the new mailing list and update their bookmarks. Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki Mailing List: https://groups.google.com/d/forum/laptop-mode-tools Note: For users who are not comfortable with creating a Google a/c for mailing list subscription, you should still be able to subscribe with your personal email address. Please follow the steps on the mentioned homepage. For the last couple of releases, I've also been providing RPM packages for openSUSE Tumbleweed and Fedora. The same should be available on the GitHub release page. The Debian package will follow shortly in the Debian repository. Thank you and a Happy Maha Shivaratri. Har Har Mahadev.
1.69 - Mon Mar  7 17:44:42 IST 2016
    * Wait for all forked modules to complete
    * Add new module: cputhotplug
    * CPU online/offine is reverse here
    * Fix shell syntax
    * Install policykit file
    * Detach polling daemon from main process
    * Do NOT touch speed if throttling is not set
    * Restore to MAX speed when back to AC Power
    * Fix manpage about DISABLE_ETHERNET_ON_BATTERY setting
    * Update documentation about ENABLE_LAPTOP_MODE_ON_AC setting
    * Change powersaving default for USB class devices
    * Drop usbhid from default (black)list
    * Add usb keyboard driver to the list default list
    * Be consistent with passing args to LMT from different invokers
    * Honor device plug/unplug events on a per device basis;
      like how Chromium initially submitted this patch
    * Also be consistent with option in the event parser
    * Update links in README.md
    * Update new github homepage location
    * Add lmt-config-gui shell script


29 February 2016

Chris Lamb: Free software activities in February 2016

Here is my monthly update covering a large part of what I have been doing in the free software world (previously):
Debian
  • Updated travis.debian.net, a hosted script to easily test and build Debian packages on the Travis CI continuous integration platform, to support:
    • Automatic bumping of the version number in debian/changelog based on TRAVIS_BUILD_NUMBER. (#14)
    • Security repositories. Thanks to Stefan Jenkner for the initial pull request. These are additionally now enabled by default. (#15)
    • The backports repositories. (#13)
  • Applied #812830 and #812830 from James Clark to the Debian Archive Kit to improve the interface of various webpages it generates.
  • Updated the SSL certificate for try.diffoscope.org, a hosted version of the diffoscope in-depth and content-aware diff utility. Thanks to Bytemark for sponsoring the hardware.
  • Worked on my slides for Reproducible Builds - fulfilling the original promise of free software, to be presented at FOSSASIA '16.
My work in the Reproducible Builds project was also covered in more depth in Lunar's weekly reports (#40, #41, #42, #43)
LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duty for the week of the 22nd to the 28th, triaging CVEs, etc.
  • Proofread announcements, etc. for the upcoming migration to wheezy-lts.
  • Issued DLA 417-1 for xdelta3 to fix a buffer overflow that allowed arbitrary code execution from input files.
  • Issued DLA 420-1 for libmatroska, correcting a heap information leak.
  • Issued DLA 428-1 for websvn fixing a cross-site scripting vulnerability.
  • Issued DLA 429-1 for pixman fixing a buffer overflow issue.
  • Issued DLA 430-1 & DLA 431-1 for libfcgi and libfcgi-perl respectively, fixing a remote denial-of-service (DoS) vulnerability.

Uploads
  • redis (2:3.0.7-2) - Correcting my SOURCE_DATE_EPOCH reproducibility patch as the conditional was accidentally inverted. Thanks to Reiner Herrmann (deki).
  • disque (1.0~rc1-5) - Making the parallel SOURCE_DATE_EPOCH patch change and additionally tidying the packaging after introducing procps as a build-dependency.


RC bugs


I also filed 137 FTBFS bugs against aac-tactics, angular.js, astyle, bcftools, blacs-mpi, bogofilter, boxes, caldav-tester, ccdproc, ckeditor, coq-float, cqrlog, dasher, django-recurrence, dspdfviewer, eclipse-egit, ess, etcd, felix-latin, fio, flexml, funny-manpages, gap-atlasrep, garmin-plugin, gitlab, gnome-mines, graphicsmagick, haskell-nettle, healpy, hg-git, hunspell, hwloc, ijs, ipset, janest-core-extended, jpathwatch, kcompletion, kcompletion, keyrings.alt, kodi-pvr-hts, kodi-pvr-vdr-vnsi, libcommons-compress-java, libgnome2-wnck-perl, libkate, liblrdf, libm4ri, libnet-server-mail-perl, libsis-jhdf5-java, libspectre, libteam, libwnck, libwnckmm, libxkbcommon, lombok, lombok-patcher, mako, maven-dependency-analyzer, mopidy-mpris, mricron, multcomp, netty-3.9, numexpr, ocaml-textutils, openimageio, openttd-openmsx, osmcoastline, osmium-tool, php-guzzle, php-net-smartirc, plexus-component-metadata, polari, profitbricks-client, pyentropy, pynn, pyorbital, pypuppetdb, python-aioeventlet, python-certifi, python-hglib, python-kdcproxy, python-matplotlib-venn, python-mne, python-mpop, python-multipletau, python-pbh5tools, python-positional, python-pydot-ng, python-pysam, python-snuggs, python-tasklib, r-cran-arm, r-cran-httpuv, r-cran-tm, rjava, ros-geometry-experimental, ros-image-common, ros-pluginlib, ros-ros-comm, rows, rr, ruby-albino, ruby-awesome-print, ruby-default-value-for, ruby-fast-gettext, ruby-github-linguist, ruby-gruff, ruby-hipchat, ruby-omniauth-crowd, ruby-packetfu, ruby-termios, ruby-thinking-sphinx, ruby-tinder, ruby-versionomy, ruby-zentest, sbsigntool, scikit-learn, scolasync, sdl-image1.2, signon-ui, sisu-guice, sofa-framework, spykeutils, ssreflect, sunpy, tomcat-maven-plugin, topmenu-gtk, trocla, trocla, tzdata, verbiste, wcsaxes, whitedune, wikidiff2, wmaker, xmlbeans, xserver-xorg-input-aiptek & zeroc-icee-java.

FTP Team

As a Debian FTP assistant I ACCEPTed 107 packages: androguard, android-platform-dalvik, android-platform-development, android-platform-frameworks-base, android-platform-frameworks-native, android-platform-libnativehelper, android-platform-system-core, android-platform-system-extras, android-platform-tools-base, android-sdk-meta, apktool, armci-mpi, assertj-core, bart, bind9, caja, caldav-tester, clamav, class.js, diamond, diffoscope, django-webpack-loader, djangocms-admin-style, dnsvi, esptool, fuel-astute, gcc-6-cross, gcc-6-cross-ports, gdal, giella-core, gnupg, golang-github-go-ini-ini, golang-github-tarm-serial, gplaycli, gradle-jflex-plugin, haskell-mountpoints, haskell-simple, hurd, iceweasel, insubstantial, intellij-annotations, jetty9, juce, keyrings.alt, leptonlib, libclamunrar, libdate-pregnancy-perl, libgpg-error, libhtml5parser-java, libica, libvoikko, linux, llvm-toolchain-3.8, lombok-patcher, mate-dock-applet, mate-polkit, mono-reference-assemblies, mxt-app, node-abab, node-array-equal, node-array-flatten, node-array-unique, node-bufferjs, node-cors, node-deep-extend, node-original, node-setimmediate, node-simplesmtp, node-uglify-save-license, node-unpipe, oar, openjdk-8, openjdk-9, pg8000, phantomjs, php-defaults, php-random-compat, php-symfony-polyfill, pnetcdf, postgresql-debversion, pulseaudio-dlna, pyconfigure, pyomo, pysatellites, python-fuelclient, python-m3u8, python-pbh5tools, python-qtpy, python-shellescape, python-tunigo, pyutilib, qhull, r-cran-rjsonio, r-cran-tm, reapr, ruby-fog-dynect, scummvm-tools, symfony, talloc, tesseract, twextpy, unattended-upgrades, uwsgi, vim-command-t, win-iconv, xkcdpass & xserver-xorg-video-ast. I additionally REJECTed 4 packages.

26 December 2015

Norbert Preining: Debian/TeX Live 2015.20151226-1

Before I disappear into the Japanese winter holidays, here the Christmas update of all TeX Live packages. Nothing spectacular here, just the usual bunch of updates of loads of packages. There is a bug with respect to a file move from one package to another, will be fixed in an upload soon. Debian - TeX Live 2015 Updated packages apnum, appendix, archaeologie, babel, babel-french, babel-greek, babel-hungarian, bangorcsthesis, beamer-verona, bhcexam, bibarts, bidi, bxjscls, chemfig, commado, comprehensive, context, cslatex, csplain, ctanify, doclicense, dvips, ejpecp, exsheets, fbb, fibeamer, fithesis, fontools, francais-bst, gitinfo2, glossaries, gnuplottex, greek-fontenc, hyph-utf8, indextools, ksp-thesis, l3build, l3experimental, l3kernel, l3packages, latexcourse-rug, lollipop, lstbayes, lualibs, luaotfload, luatexja, luatexko, luatodonotes, make4ht, mcf2graph, media9, mex, mfirstuc, mhchem, mptopdf, nameauth, nevelok, newtx, nicetext, nucleardata, ocgx2, pageslts, petri-nets, phonrule, pkuthss, powerdot, proofread, proposal, pst-labo, pst-solides3d, reledmac, sapthesis, serbian-lig, showlabels, t2, tcolorbox, tempora, testhyphens, tetex, tex4ebook, tex4ht, texlive-scripts, texsis, thuthesis, toptesi, tudscr, ucharcat, versonotes, xcharter, xepersian, xifthen, xindy, xint. New packages beamertheme-detlevcm, beamertheme-metropolis, beamertheme-phnompenh, beamer-verona, bitpattern, carbohydrates, delimseasy, drawmatrix, einfuehrung2, ellipse, ffslides, gitlog, greektonoi, ksp-thesis, longfbox, options, simpler-wick, unicode-data. Enjoy.

15 December 2015

Bartosz Feński: Once again two full-time days to work on Debian

Thanks to my employer I had the opportunity to spend two days just working on my packages.
I know it's kinda sad that I have to wait for these two special days to do my (volunteer, but still) job. Anyway. I was able to dig through all the changes in our policy and standards and update some of my packages. Changes include:

potrace (1.13-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * New upstream version.
  * Bumped Standards-Version (no changes needed)
 -- Bartosz Fenski  Thu, 10 Dec 2015 10:37:54 +0100

ipcalc (0.41-5) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * Updating to the newest standards of basically everything.
    Very cool experience.
  * CGI script is now optional and put into examples (Closes: #763032)
 -- Bartosz Fenski  Thu, 10 Dec 2015 13:51:12 +0100

dibbler (1.0.1-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * New upstream release (Closes: #780232, #795493)
    - fixes segfaults in TClntCfgMgr::validateConfig (Closes: #732697)
  * Includes debugging packages (Closes: #732707)
  * Bumped Standards-Version (no changes needed)
 -- Bartosz Fenski  Thu, 10 Dec 2015 13:33:56 +0100

calcurse (4.0.0-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * New upstream version.
  * Properly handles apts file with new line sign (Closes: #749282)
  * Explicitly uses autotools-dev dependency.
  * Bumped Standards-Version. (no changes needed)
 -- Bartosz Fenski  Mon, 14 Dec 2015 10:55:30 +0100

libstatgrab (0.91-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * New upstream version (Closes: #804480)
  * ACK NMUs, thanks Manuel!
  * Bumped Standards-Version (no changes needed)
 -- Bartosz Fenski  Mon, 14 Dec 2015 14:27:37 +0100

httpie (0.9.2-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * ACK previous NMU, thanks a lot Vincent!
  * Bumped required python-requests version (Closes: #802540)
 -- Bartosz Fenski  Mon, 14 Dec 2015 15:39:37 +0100

These uploads fixed 7 bugs and 9 lintian warnings/errors. Apart from that, I reviewed the mydumper package update and uploaded it for Mateusz Kijowski, who is trying to become an NM. Thanks Akamai!

8 June 2015

Timo Jyrinki: Quick Look: Dell XPS 13 Developer Edition (2015) with Ubuntu 14.04 LTS

I recently obtained Dell's newest Ubuntu developer offering, the XPS 13 (2015, model 9343). I opted for the FullHD non-touch display, mostly because of better battery life, no actual need for a higher resolution, and the matte screen which is great outside. Touch would have been "nice-to-have", but in my work I don't really need it.

The other specifications include an i7-5600U CPU, 8GB RAM, a 256GB SSD [edit: lshw], and of course Ubuntu 14.04 LTS pre-installed as an OEM-specific installation. It was not possible to order it directly from Dell's site, as Finland is reportedly not an online market for Dell... The wholesale company however managed to get two models on their lists and so it's now possible to order via retailers. [edit: here are some country-specific direct web order links, however: US, DE, FR, SE, NL]

In this blog post I give a quick look at how I started using it, and make a few observations about the pre-installed Ubuntu. I personally was interested in using the pre-installed Ubuntu like a non-Debian/Ubuntu developer would use it, but Dell has also provided instructions for Ubuntu 15.04, Debian 7.0 and Debian 8.0 for advanced users, among others. Even if you do not use the pre-installed Ubuntu, the benefit of buying an Ubuntu laptop is obviously the smaller cost and, on the other hand, contributing to free software (by paying for the hardware enablement engineering done by or purchased by Dell).
Unboxing
The Black Box. (and white cat)

Opened box.
First time lid opened, no dust here yet!
First time boot up, transitioning from the boot logo to a first time Ubuntu video.
A small clip from the end of the welcoming video.
First time setup. Language, Dell EULA, connecting to WiFi, location, keyboard, user+password.
Creating recovery media. I opted not to do this as I had happened to read that it's highly recommended to install upgrades first, including to this tool.
Finalizing setup.
Ready to log in!
It's alive!
Not so recent 14.04 LTS image... lots of updates.
Problems in the First Batch
Unfortunately the first batch of XPS 13s with Ubuntu is going to ship with some problems. They're easy to fix if you know how to, but it's sad that they're there to begin with in the factory image. There is no knowledge of when a fixed batch will start shipping - July maybe?

First of all, installing software upgrades stops. You need to run the following command via Dash Terminal once: sudo apt-get install -f (it suggests upgrading libc-dev-bin, libc6-dbg, libc6-dev and udev). After that you can continue running Software Updater as usual, maybe rebooting in between.

Secondly, the fixed touchpad driver is included but not enabled by default. You need to enable the only non-enabled Additional Driver as seen in the picture below or as instructed on YouTube.

Dialog enabling the touchpad driver.

Clarification: you can safely ignore the two paragraphs below, they're just for advanced users like me who want to play with upgraded driver stacks.

Optionally, since I'm interested in the latest graphics drivers especially in case of a brand new hardware like Intel Broadwell, I upgraded my Ubuntu to use the 14.04.2 Hardware Enablement stack (matches 14.10 hardware support): sudo apt install --install-recommends libgles2-mesa-lts-utopic libglapi-mesa-lts-utopic linux-generic-lts-utopic xserver-xorg-lts-utopic libgl1-mesa-dri-lts-utopic libegl1-mesa-drivers-lts-utopic libgl1-mesa-glx-lts-utopic:i386

Even though it's much better than a normal Ubuntu 14.10 would be since many of the Dell fixes continue to be in use, some functionality might become worse compared to the pre-installed stack. The only thing I have noticed though is the internal microphone not working anymore out-of-the-box, requiring a kernel patch as mentioned in Dell's notes. This is not a surprise since the real eventual upstream support involves switching from HDA to I2S and during 14.10 kernel work that was not nearly done. If you're excited about new drivers, I'd recommend waiting until August when the 15.04 based 14.04.3 stack is available (same package names, but 'vivid' instead of 'utopic'). [edit: I couldn't resist myself when I saw linux-generic-lts-vivid (3.19 kernel) is already in the archives. 14.04.2 + that gives me working microphone again!]
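For reference, a minimal sketch of that later kernel addition on top of the 14.04.2 HWE stack; only the package name is taken from the note above, and pulling in the recommends (as done for the rest of the stack earlier) is my assumption:

sudo apt-get install --install-recommends linux-generic-lts-vivid
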
Conclusion
Dell XPS 13 Developer Edition with Ubuntu 14.04 LTS is an extremely capable laptop + OS combination nearing perfection, but not quite there because of the software problems in the launch pre-install image. The laptop looks great, feels like a quality product should and is very compact for the screen size.

I've moved over all my work onto it and everything so far is working smoothly in my day-to-day tasks. I'm staying at Ubuntu 14.04 LTS and using my previous LXC configuration to run the latest Ubuntu and Debian development versions. I've also done some interesting changes already like LUKS In-Place Conversion, converting the pre-installed Ubuntu into whole disk encrypted one (not recommended for the faint hearted, GRUB reconfiguration is a bit of a pain).

I look happily forward to working a few productive years with this one!

28 May 2015

Sven Hoexter: RMS, free software and where I fail the goal

You might have already read this comment by RMS in the Guardian. That comment and a recent discussion about the relevance of GPL changes post GPLv2 made me think again about the battle RMS started to fight. While some think RMS should "retire", at least I still fail on my personal goal to not depend on non-free software and services. So for me this battle is far from over, and here is my personal list of "non-free debt" I've to pay off. general purpose systems aka your computer Looking at the increasing list of firmware blobs required to use a GPU, wireless chipsets and more and more wired NICs, the situation seems to be worse then in the late 90s. Back then the primary issue was finding supported hardware, but the driver was free. Nowadays even the open sourced firmware often requires obscure patched compilers to build. If I look at this stuff I think the OpenBSD project got that right with the more radical position. Oh and then there is CPU microcode. I'm not yet sure what to think about it, but in the end it's software and it's not open source. So it's non-free software running on my system. Maybe my memory is blurred due to the fact, that the seperation of firmware from the Linux kernel, and proper firmware loading got implemented only years later. I remember the discussion about the pwc driver and its removal from Linux. Maybe the situation wasn't better at that time but the firmware was just hidden inside the Linux driver code? On my system at work I've to add the Flash plugin to the list due to my latest test with Prezi which I'll touch later. I also own a few Humble Indie bundles. I played parts of Osmos after a recommendation by Joey Hess, I later finished to play through Limbo and I got pretty far with Machinarium on a Windows system I still had at that time. I also tried a few others but never got far or soon lost interest. Another thing I can not really get rid of is unrar because of stuff I need to pull from xda-developer links just to keep a cell phone running. Update: Josh Triplett pointed out that there is unar available in the Debian archive. And indeed that one works on the rar file I just extracted. Android ecosystem I will soon get rid of a stock S3 mini and try to replace it with a moto g loaded with CyanogenMod. That leaves me with a working phone with a OS that just works because of a shitload of non-free blobs. The time and work required to get there is another story. Among others you need a new bootloader that requires a newer fastboot compared to what we have in Jessie, and later you also need the newer adb to be able to sideload the CM image. There I gave in and just downloaded the pre build SDK from Google. And there you've another binary I did not even try to build from source. Same for the CM image itself, though that's not that much different from using a GNU/Linux distribution if you ignore the trust issues. It's hard to trust the phone I've build that way, but it's the best I can get at the moment with at least some bigger chunks of free software inside. So let's move to the applications on the phone. I do not use GooglePlay, so I rely on f-droid and freeware I can download directly from the vendor. "Cloud" services This category mixes a lot with the stuff listed above, most of them are not only an application, in fact Threema and Wunderlist are useless without the backend service. And Opera is just degraded to a browser - and to be replaced with Firefox - if you discount the compression proxy. The other big addition in this category is Prezi. 
We tried it out at work after it got into my focus due to a post by Dave Aitel. It's kind of the poster child of non-freeness. It requires a non-free, unstable, insecure and half way deprecated browser plugin to work, you can not download your result in a useful format, you've to buy storage for your presentation at this one vendor, you've to pay if you want to keep your presentation private. It's the perfect lockin situation. But still it's very convenient, prevents a lot of common mistakes you can make when you create a presentation and they invented a new concept of presenting. I know about impress.js(hosted on a non-free platform by the way, but at least you can export it from there) and I also know about hovercraft. I'm impressed by them, but it's still not close to the ease of use of Prezi. So here you can also very prominently see the cost of free and non-free software. Invest the time and write something cool with CSS3 and impress.js or pay Prezi to just klick yourself through. To add something about the instability - I had to use a windows laptop for presenting with Prezi because the Flash plugin on Jessie crashed in the presentation mode, I did not yet check the latest Flash update. I guess that did not make the situation worse, it already is horrible. Update: Daniel Kahn Gillmore pointed out that you can combine inkscape with sozi, though the Debian package is in desperate need for an active maintainer, see also #692989. I also use kind of database services like duden.de and dict.cc. When I was younger you bought such things printed on dead trees but they did not update very well. Thinking a bit further, a Certification Authority is not only questionable due to the whole trust issue, they also provide OCSP responder as kind of a web service. And I've already had the experience what the internet looks like when the OCSP systems of GlobalSign failed. So there is still a lot to fight for and a lot of "personal non-free debt" to pay off.

12 April 2015

Bartosz Feński: lack of SRS on @debian.org mail servers

Any good reason that we're not using SRS to make SPF-aware servers happy?
That's kinda strange given that @debian.org is doing only forwarding.

20 March 2015

Steve McIntyre: Tour of Australia

Jo and I just got back from our massive holiday in Australia. We had an awesome time overall, fitting in lots of stuff in 4 weeks. Time for a quick write-up and some photos! Ayers Rock We flew into Sydney, then straight onto Uluru for the obligatory sunset and sunrise viewings. We didn't climb the Rock, both for sensitivity reasons and (to be more honest!) it looked way too much like hard work in 40-plus degree heat. Ghan train Coach over to Alice Springs, where we had a very quick look around before taking the Ghan train down to Adelaide. The train was fun for a day, and we got to see a lot of desert. In Adelaide, we had a look around the city (lovely colonial feel!) and got a couple of evenings in fun comedy shows at the Fringe. Great fun! Cuddling a sleepy wombat! On to Tasmania, where we did a quick (3 days) run around the island by car: into Hobart, up the east coast. Stopped in Swansea (a nice version!) for some heavenly Devonshire teas, then on up to Grindelwald near Launceston. Visited Trowunna Wildlife Park to see (and cuddle!) lots of local animals, which was amazing - Jo's favourite day of the holiday. Then on to Queenstown and drive back down to Hobart past some impossibly beautiful views around Cradle Mountain. Tassie's gorgeous - like the best bits of Scotland, Wales and Cornwall but with even fewer people and better weather. Sydney Opera House Next, on to Sydney for Harry and Cath's wedding. We stayed up in Chatswood. Not knowing anything about the area beforehand, we were a little surprised to basically find ourselves back in Hong Kong! We spent most of the weekend catching up with friends from the wedding group, and the wedding itself was at Quarantine Station, overlooking the harbour. It couldn't have been a more perfect location / weather / view for our friends' big day! We squeezed in a couple of the open-top bus tours of Sydney on the Sunday, but got caught in the horrendous storm that hit and ended up sheltering downstairs under cover on the bus. I'm told Bondi is lovely, but it all looked grey from the bus. :-P Puffing Billy, Yarra Valley Down to Melbourne on the train (bit of a wasted day, in hindsight), where we wandered around the city quite a bit. Caught up with an old friend who lives there for a day, and we did a wine tour up the Yarra Valley which was fun too. Snorkelling at the Reef - all OK! Up to Port Douglas, where we headed out to the Reef for my highlight of the holiday: a snorkelling tour with some local marine experts who showed us the local flora and fauna. We also visited a local Aboriginal cultural centre, skyrail and scenic railway around Kuranda village. Koala! :-) Down to Hervey Bay and a 1-day tour of Fraser Island - an amazing place in combination with quite a thrill-ride experience just being driven around on the sand tracks. Finally, down to Brisbane where we wandered around and visited both the Lone Pine Koala Sanctuary (more cuddles!) and the Gold Coast. Then the long flights home. Whew! We're knackered now. We knew we could't fit everything in, but we're glad we travelled all over and got tastes of almost everything. Now we can work out where we want to spend more time on our future visit(s). We'll definitely want to head over and see Perth and some of WA next time, and definitely more time in Tasmania, Sydney and Adelaide.

2 December 2014

Matthias Klumpp: How to build a cross-distro package with Limba

Disclaimer: Limba is still in a very early stage of development. Bugs happen, and I give no guarantees on API stability yet. Limba is a very simple cross-distro package installer, utilizing OverlayFS found in recent Linux kernels (>= 3.18). As an example I created a small Limba package for one of the Qt5 demo applications, and I would like to share the process of creating Limba packages - it's quite simple, and I could use some feedback on how well the resulting packages work on multiple distributions. I assume that you have compiled Limba and installed it - how that is done is described in its README file. So, let's start.
1. Prepare your application
The cool thing about Limba is that you don't really have to make many changes to your application. There are a few things to pay attention to, though: This needs to be done so your application will find its data at runtime. Additionally, you need to write an AppStream metadata file, and find out which stuff your application depends on.
2. Create package metadata & install software
1.1 Basics
Now you can create the metadata necessary to build a Limba package. Just run
cd /path/to/my/project
lipkgen make-template
This will create a pkginstall directory, containing a control file and a metainfo.xml file, which can be a symlink to the AppStream metadata, or be new metadata. Now, configure your application with /opt/bundle as install prefix (-DCMAKE_INSTALL_PREFIX=/opt/bundle, prefix=/opt/bundle, etc.) and install it to the pkginstall/inst_target directory. 1.2 Handling dependencies If your software has dependencies on other packages, just get the Limba packages for these dependencies, or build new ones. Then place the resulting IPK packages in the pkginstall/repo directory. Ideally, you should be able to fetch Limba packages which contain the software components directly from their upstream developers. Then, open the pkginstall/control file and adjust the Requires line. The names of the components you depend on match their AppStream-IDs (<id/> tag in the AppStream XML document). Any version-relation (>=, >>, <<, <=, <>) is supported, and specified in brackets after the component-id. The resulting control-file might look like this:
Format-Version: 1.0

Requires: Qt5Core (>= 5.3), Qt5DBus (>= 5.3), libpng12
If the specified dependencies are in the repo/ subdirectory, these packages will get installed automatically if your application package is installed. Otherwise, Limba depends on the user to install these packages manually - there is no interaction with the distribution's package-manager (yet?).
3. Building the package
In order to build your package, make sure the content in inst_target/ is up to date, then run
lipkgen build pkginstall/
This will build your package and output it in the pkginstall/ directory.
4. Testing the package
You can now test your package. Just run
sudo lipa install package.ipk
Your software should install successfully. If you provided a .desktop file in $prefix/share/applications, you should find your application in your desktop's application-menu. Otherwise, you can run a binary from the command-line; just append the version of your package to the binary name (bash-completion helps). Alternatively, you can use the runapp command, which lets you run any binary in your bundle/package, which is quite helpful for debugging (since the environment a Limba-installed application runs in is different from the one of other applications). Example:
runapp ${component_id}-${version}:/bin/binary-name
And that's it! :-) I used these steps to create a Limba package for the OpenGL Qt5 demo on Tanglu 2 (Bartholomea), and tested it on Kubuntu 15.04 (Vivid) with KDE, as well as on an up-to-date Fedora 21, with GNOME and without any Qt or KDE stuff installed. I encountered a few obstacles when building the packages, e.g. Qt5 initially didn't find the right QPA plugin; that has been fixed by adjusting a config file in the Qt5Gui package. Also, on Fedora, a matching libpng was missing, so I included that as well. You can find the packages at GitHub currently (but I am planning to move them to a different place soon). The biggest issue with Limba at this time is that it needs Linux 3.18, or an older kernel with OverlayFS support compiled in. Apart from that and a few bugs, the experience is quite smooth. As soon as I am sure there are no hidden fundamental issues, I can think of implementing more features, like signing packages and automatically updating them. Have fun playing around with Limba!
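To recap, a condensed sketch of the flow described above for a CMake-based project; the DESTDIR-style install into pkginstall/inst_target and the final package filename are my assumptions, while the lipkgen and lipa invocations are exactly the ones shown earlier:

cd /path/to/my/project
lipkgen make-template                        # creates pkginstall/ with control and metainfo.xml
cmake -DCMAKE_INSTALL_PREFIX=/opt/bundle .   # configure with the /opt/bundle prefix
make
make DESTDIR=$(pwd)/pkginstall/inst_target install
# place IPK packages for any dependencies in pkginstall/repo/
# and adjust the Requires line in pkginstall/control
lipkgen build pkginstall/                    # the resulting .ipk ends up in pkginstall/
sudo lipa install pkginstall/*.ipk           # adjust to the actual filename produced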

21 November 2014

Daniel Pocock: PostBooks 4.7 packages available, xTupleCon 2014 award

I recently updated the PostBooks packages in Debian and Ubuntu to version 4.7. This is the version that was released in Ubuntu 14.10 (Utopic Unicorn) and is part of the upcoming Debian 8 (jessie) release. Better prospects for Fedora and RHEL/CentOS/EPEL packages As well as getting the packages ready, I've been in contact with xTuple helping them generalize their build system to make packaging easier. This has eliminated the need to patch the makefiles during the build. As well as making it easier to support the Debian/Ubuntu packages, this should make it far easier for somebody to create a spec file for RPM packaging too. Debian wins a prize While visiting xTupleCon 2014 in Norfolk, I was delighted to receive the Community Member of the Year award which I happily accepted not just for my own efforts but for the Debian Project as a whole. Steve Hackbarth, Director of Product Development at xTuple, myself and the impressive Community Member of the Year trophy This is a great example of the productive relationships that exist between Debian, upstream developers and the wider free software community and it is great to be part of a team that can synthesize the work from so many other developers into ready-to-run solutions on a 100% free software platform. Receiving this award really made me think about all the effort that has gone into making it possible to apt-get install postbooks and all the people who have collectively done far more work than myself to make this possible: Here is a screenshot of the xTuple web / JSCommunicator integration, it was one of the highlights of xTupleCon: and gives a preview of the wide range of commercial opportunities that WebRTC is creating for software vendors to displace traditional telecommunications providers. xTupleCon also gave me a great opportunity to see new features (like the xTuple / Drupal web shop integration) and hear about the success of consultants and their clients deploying xTuple/PostBooks in various scenarios. The product is extremely strong in meeting the needs of manufacturing and distribution and has gained a lot of traction in these industries in the US. Many of these features are equally applicable in other markets with a strong manufacturing industry such as Germany or the UK. However, it is also flexible enough to simply disable many of the specialized features and use it as a general purpose accounting solution for consulting and services businesses. This makes it a good option for many IT freelancers and support providers looking for a way to keep their business accounts in a genuinely open source solution with a strong SQL backend and a native Linux desktop interface.

4 November 2014

Raphaël Hertzog: My Debian LTS report for October 2014

During October, I spent 10 hours on paid LTS work. I should have worked 4 hours more, but for various reasons this did not happen. Instead I'll spend 4 more hours in November. During this time, I did the following: After a few months of work on LTS, I'm starting to have a better grasp on the workflow and on what can be done or not. But I'm still astonished that we have so few squeeze users on the mailing list. If you're using Squeeze, please subscribe to the list and test the packages that contributors are submitting for tests/validation. It really helps to have some feedback from real users before releasing an update, in particular when the Debian contributor who prepared the update is not a user of said package. Not everybody has the skills required to prepare security updates, but everybody can help test packages, so you have no excuses. ;-) And we still need more organizations joining the LTS project, either by providing help (like Catalyst did by letting Andrew Bartlett work on LTS, thanks to them!) or by sponsoring the
project
and letting others do the work.


25 July 2014

Juliana Louback: Extending an xTuple Business Object

xTuple is in my opinion incredibly well designed; the code is clean and the architecture adherent to a standardized structure. All this makes working with xTuple software quite a breeze. I wanted to integrate JSCommunicator into the web-based xTuple version. JSCommunicator is a SIP communication tool, so my first step was to create an extension for the SIP account data. Luckily for me, the xTuple development team published an awesome tutorial for writing an xTuple extension. xTuple cleverly uses model-based business objects for the various features available. This makes customizing xTuple very straightforward. I used the tutorial mentioned above for writing my extension, but soon noticed my goals were a little different. A SIP account has 3 data fields, these being the SIP URI, the account password and an optional display name. xTuple currently has a business object in the core code for a User Account, and it would make a lot more sense to simply add my 3 fields to this existing business object rather than create another business object. The tutorial very clearly shows how to extend a business object with another business object, but not how to extend a business object with only new fields (not a whole new object). Now maybe I'm just a whole lot slower than most people, but I had a ridiculously hard time figuring this out. Mind you, this is because I'm slow, because the xTuple documentation and code is understandable and as self-explanatory as it gets. I think it just takes a bit to get used to. Either way, I thought this just might be useful to others, so here is how I went about it.
Setup
First you'll have to set up your xTuple development environment and fork the xtuple and xtuple-extensions repositories as shown in this handy tutorial. A footnote I'd like to add: please verify that your version of Vagrant (and anything else you install) is the one listed in the tutorial. I think I spent like two entire days or more on a wild goose (bug) chase trying to set up my environment, when the cause of all the errors was that I had somehow installed an older version of Vagrant - 1.5.4 instead of 1.6.3. Please don't make the same mistake I did. Actually, if for some reason you get the following error when you try using node:
<<ERROR 2014-07-10T23:52:46.948Z>> Unrecoverable exception. Cannot call method 'extend' of undefined
    at /home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:37:39
    at Object.<anonymous> (/home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:1364:3)
    ...
chances are, you have the wrong version. That's what happened to me. The Vagrant Virtual Development Environment automatically installs and configures everything you need; it's ready to go. So if you find yourself installing, updating, running apt-get and so on, you probably did something wrong.

Coding

So by now we should have the Vagrant Virtual Development Environment set up and the web app up and running and accessible at localhost:8443. So far so good. Disclaimer: You will note that much of this is similar - or rather, nearly identical - to xTuple's tutorial, but there are some small but important differences and a few observations I think might be useful. Other disclaimer: I'm describing how I did it, which may or may not be up to snuff. Works for me though.

Schema

First let's make a schema for the table we will create with the new custom fields. Be sure to create the correct directory structure, aka /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>/database/source, or in my case /path/to/xtuple-extensions/source/sip_account/database/source, and create the file create_sa_schema.sql; sa is the name of my schema. This file will contain the following lines:
do $$
  /* Only create the schema if it hasn't been created already */
  var res, sql = "select schema_name from information_schema.schemata where schema_name = 'sa'";
  res = plv8.execute(sql);
  if (!res.length) {
    sql = "create schema sa; grant all on schema sa to group xtrole;";
    plv8.execute(sql);
  }
$$ language plv8;
Of course, feel free to replace sa with your schema name of choice. All the code described here can be found in my xtuple-extensions fork, on the sip_ext branch.

Table

We'll create a table containing your custom fields and a link to an existing table - the table for the existing business object you want to extend. If you're wondering why you should make a whole new table for a few extra fields, here's a good explanation; the case in question is adding fields to the Contact business object. You need to first figure out what table you want to link to. This might not be uber easy. I think the best way to go about it is to look at the ORMs. The xTuple ORMs are a JSON mapping between the SQL tables and the object-oriented world above the database. They're .json files found at /path/to/xtuple/node_modules/xtuple/enyo-client/database/orm/models for the core business objects and at /path/to/xtuple/enyo-client/extensions/source/<EXTENSION NAME>/database/orm/models for extension business objects. I'll give two examples. If you look at contact.json you will see that the Contact business object refers to the table "cntct" - look for the "type": "Contact" on the line above, so we know it's the Contact business object. In my case, I wanted to extend the UserAccount and UserAccountRelation business objects, so check out user_account.json. The table listed for UserAccount is xt.usrinfo and the table listed for UserAccountRelation is xt.usrlite. A closer look at the sql files for these tables (usrinfo.sql and usrlite.sql) revealed that usrinfo is in fact a view and usrlite is "A light weight table of user information used to avoid punishingly heavy queries on the public usr view". I chose to refer to xt.usrlite - that, or I received error messages when trying the other table names. Now I'll make the file /path/to/xtuple-extensions/source/sip_account/database/source/usrlitesip.sql, to create a table with my custom fields plus the link to the usrlite table. Don't quote me on this, but I'm under the impression that this is the norm for naming the sql file joining tables: the name of the table you are referring to (usrlite in this case) plus your extension's name. Content of usrlitesip.sql:
select xt.create_table('usrlitesip', 'sa');
select xt.add_column('usrlitesip','usrlitesip_id', 'serial', 'primary key', 'sa');
select xt.add_column('usrlitesip','usrlitesip_usr_username', 'text', 'references xt.usrlite (usr_username)', 'sa');
select xt.add_column('usrlitesip','usrlitesip_uri', 'text', '', 'sa');
select xt.add_column('usrlitesip','usrlitesip_name', 'text', '', 'sa');
select xt.add_column('usrlitesip','usrlitesip_password', 'text', '', 'sa');
comment on table sa.usrlitesip is 'Joins User with SIP account';
Breaking it down, line 1 creates the table named usrlitesip (no duh), line 2 is for the primary key (self-explanatory). You can then add any columns you like, just be sure to add one that references the table you want to link to. I checked usrlite.sql and saw that its primary key is usr_username; be sure to use the primary key of the table you are referencing. You can check what you made by executing the .sql files like so:
$ cd /path/to/xtuple-extensions/source/sip_account/database/source
$ psql -U admin -d dev -f create_sa_schema.sql
$ psql -U admin -d dev -f usrlitesip.sql
After which you will see the table with the columns you created if you enter:
$ psql -U admin -d dev -c "select * from sa.usrlitesip;"
Now create the file /path/to/xtuple-extensions/source/sip_account/database/source/manifest.js to put the files together and in the right order. It should contain:
{
  "name": "sip_account",
  "version": "1.4.1",
  "comment": "Sip Account extension",
  "loadOrder": 999,
  "dependencies": ["crm"],
  "databaseScripts": [
    "create_sa_schema.sql",
    "usrlitesip.sql",
    "register.sql"
  ]
}
I think the name has to be the same as what you named your extension directory in /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>. I think the comment can be anything you like, and you want your loadOrder to be high so it's the last thing installed (as it's an add-on). So far we are doing exactly what's instructed in the xTuple tutorial. It's repetitive, but I think you can never have too many examples to compare to. In databaseScripts you will list the two .sql files you just created for the schema and the table, plus another file to be made in the same directory named register.sql. I'm not sure why you have to make register.sql, or even if you indeed have to. If you leave the file empty, there will be a build error, so put a ; in register.sql, or remove the "register.sql" line from manifest.js, as I think for now we are good without it. Now let's update the database with our new extension:
$ cd /path/to/xtuple
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ psql -U admin -d dev -c "select * from xt.ext;"
That last command should display a table with a list of extensions: the ones already in xtuple, like crm and billing and some others, plus your new extension, in this case sip_account. When you run build_app.js you'll probably see a message along the lines of "<Extension name> has no client code, not building client code" and that's fine, because yeah, we haven't worked on the client code yet.

ORM

Here's where things start getting different. So ORMs link your object to an SQL table. But we DON'T want to make a new business object, we want to extend an existing business object, so the ORM we will make will be a little different from the xTuple tutorial. Steve Hackbarth kindly explained this new business object/existing business object ORM concept here. First we'll create the directory /path/to/xtuple-extensions/source/sip_account/database/orm/ext, according to xTuple convention. ORMs for new business objects would be put in /path/to/xtuple-extensions/source/sip_account/database/orm/models. Now we'll create the .json file /path/to/xtuple-extensions/source/sip_account/database/orm/ext/user_account.json for our ORM. Once again, don't quote me on this, but I think the name of the file should be the name of the business object you are extending, as is done in the tutorial example extending the Contact object. In our case, UserAccount is defined in user_account.json and that's what I named my extension ORM too. Here's what you should place in it:
 1 [
 2   {
 3     "context": "sip_account",
 4     "nameSpace": "XM",
 5     "type": "UserAccount",
 6     "table": "sa.usrlitesip",
 7     "isExtension": true,
 8     "isChild": false,
 9     "comment": "Extended by Sip",
10     "relations": [
11       {
12         "column": "usrlitesip_usr_username",
13         "inverse": "username"
14       }
15     ],
16     "properties": [
17       {
18         "name": "uri",
19         "attr": {
20           "type": "String",
21           "column": "usrlitesip_uri",
22           "isNaturalKey": true
23         }
24       },
25       {
26         "name": "displayName",
27         "attr": {
28           "type": "String",
29           "column": "usrlitesip_name"
30         }
31       },
32       {
33         "name": "sipPassword",
34         "attr": {
35           "type": "String",
36           "column": "usrlitesip_password"
37         }
38       }
39     ],
40     "isSystem": true
41   },
42   {
43     "context": "sip_account",
44     "nameSpace": "XM",
45     "type": "UserAccountRelation",
46     "table": "sa.usrlitesip",
47     "isExtension": true,
48     "isChild": false,
49     "comment": "Extended by Sip",
50     "relations": [
51       {
52         "column": "usrlitesip_usr_username",
53         "inverse": "username"
54       }
55     ],
56     "properties": [
57       {
58         "name": "uri",
59         "attr": {
60           "type": "String",
61           "column": "usrlitesip_uri",
62           "isNaturalKey": true
63         }
64       },
65       {
66         "name": "displayName",
67         "attr": {
68           "type": "String",
69           "column": "usrlitesip_name"
70         }
71       },
72       {
73         "name": "sipPassword",
74         "attr": {
75           "type": "String",
76           "column": "usrlitesip_password"
77         }
78       }
79     ],
80     "isSystem": true
81   }
82 ]
Note the context is my extension name, because the context + nameSpace + type combo has to be unique. We already have a UserAccount and UserAccountRelation object in the XM namespace in the xtuple context in the original user_account.json; now we will have a UserAccount and UserAccountRelation object in the XM namespace in the sip_account context. What else is important? Note that isExtension is true on lines 7 and 47, and the relations item contains the column of the foreign key we referenced.

This is something you might want to verify: column (lines 12 and 52) is the name of the attribute on your table. When we made a reference to the primary key usr_username from the xt.usrlite table, we named that column usrlitesip_usr_username. But the inverse is the attribute name associated with the original sql column in the original ORM. Did I lose you? I had a lot of trouble with this silly thing. In the original ORM that created a new UserAccount business object, the primary key attribute is named "username", as can be seen here. That is what should be used for the inverse value: not the sql column name (usr_username) but the object attribute name (username). I'm emphasizing this because I made that mistake, and if I can spare you the pain I will.

If we rebuild our extension everything should come along nicely, but you won't see any changes just yet in the web app because we haven't created the client code.

Client

Create the directory /path/to/xtuple-extensions/source/sip_account/client, which is where we'll keep all the client code.

Extend Workspace View

I want the fields I added to show up on the form to create a new User Account, so I need to extend the view for the User Account workspace. I'll start by creating a directory /path/to/xtuple-extensions/source/sip_account/client/views and in it creating a file named workspace.js containing this code:
XT.extensions.sip_account.initWorkspace = function () {
  var extensions = [
    {kind: "onyx.GroupboxHeader", container: "mainGroup", content: "_sipAccount".loc()},
    {kind: "XV.InputWidget", container: "mainGroup", attr: "uri"},
    {kind: "XV.InputWidget", container: "mainGroup", attr: "displayName"},
    {kind: "XV.InputWidget", container: "mainGroup", type: "password", attr: "sipPassword"}
  ];
  XV.appendExtension("XV.UserAccountWorkspace", extensions);
};
So I'm initializing my workspace and creating an array of items to add (append) to the view XV.UserAccountWorkspace. The first item is an onyx.GroupboxHeader, which is a pretty divider for my new form fields, the kind you find in the web app at Setup > User Accounts, like "Overview". I have no idea what other options there are for container other than "mainGroup", so let's stick to that. I'll explain content: "_sipAccount".loc() in a bit. Next I created three input fields of the XV.InputWidget kind. This also confused me a bit, as there are different kinds of input to be used, like dropdowns and checkboxes (a hedged sketch of what that might look like follows the package.js listing below). The only advice I can give is to snoop around the web app, find an input you like and look up the corresponding workspace.js file to see what was used. What we just did is (should be) enough for the new fields to show up on the User Account form. But before we see things change, we have to package the client. Create the file /path/to/xtuple-extensions/source/sip_account/client/views/package.js. This file is needed to package groups of files and indicates the order the files should be loaded (for more on that, see this). For now, all the file will contain is:
enyo.depends(
"workspace.js"
);
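An aside on the other input kinds mentioned above: a boolean or date field would presumably be declared the same way with a different kind. The following is only a rough sketch; XV.CheckboxWidget, XV.DateWidget and the attribute names someFlag/someDate are assumptions for illustration, not part of this extension, and would need matching ORM properties to actually work:
// Hedged sketch only: what other widget kinds might look like in the same
// kind of extensions array. The kinds and attribute names are guesses for
// illustration, not something used in the sip_account extension.
var moreExtensions = [
  {kind: "XV.CheckboxWidget", container: "mainGroup", attr: "someFlag"},
  {kind: "XV.DateWidget", container: "mainGroup", attr: "someDate"}
];
// They would be appended the same way as before:
// XV.appendExtension("XV.UserAccountWorkspace", moreExtensions);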
You also need to package the views directory containing workspace.js, so create the file /path/to/xtuple-extensions/source/sip_account/client/package.js and in it show that the directory views and its contents must be part of the higher-level package:
enyo.depends(
"views"
);
I like to think of it as a box full of smaller boxes. This will sound terrible, but apparently you also need to create the file /path/to/xtuple-extensions/source/sip_account/client/core.js containing this line:
XT.extensions.sip_account = {};
I don't know why. As soon as I find out I'll be sure to inform you. (My best guess: it initializes the XT.extensions.sip_account namespace object, so that files like workspace.js can attach their init functions to it.) As we've added a file to the client directory, be sure to update /path/to/xtuple-extensions/source/sip_account/client/package.js so it includes the new file:
enyo.depends(
"core.js",
"views"
);
Translations

Remember "_sipAccount".loc() in our workspace.js file? xTuple has great internationalization support and it's easy to use. Just create the directory and file /path/to/xtuple-extensions/source/sip_account/client/en/strings.js and in it put key-value pairs for labels and their translation, like this:
(function () {
  "use strict";
  var lang = XT.stringsFor("en_US", {
    "_sipAccount": "Sip Account",
    "_uri": "Sip URI",
    "_displayName": "Display Name",
    "_sipPassword": "Password"
  });
  if (typeof exports !== 'undefined') {
    exports.language = lang;
  }
}());
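Presumably the same pattern works for other locales, though only English is set up in this extension; a hypothetical German file at /path/to/xtuple-extensions/source/sip_account/client/de/strings.js (the de directory name and the de_DE locale code are assumptions on my part) might look something like this:
// Hypothetical German translation file, client/de/strings.js.
// The de directory and de_DE code are assumptions; only en_US is
// actually set up in the sip_account extension described here.
(function () {
  "use strict";
  var lang = XT.stringsFor("de_DE", {
    "_sipAccount": "SIP-Konto",
    "_uri": "SIP-URI",
    "_displayName": "Anzeigename",
    "_sipPassword": "Passwort"
  });
  if (typeof exports !== 'undefined') {
    exports.language = lang;
  }
}());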
So far I have included all the labels I used in my Sip Account form. If you use the wrong label (key) or forget to include a corresponding key-value pair in strings.js, xTuple will simply name your label "_labelName", underscore and all. Now build your extension and start up the server:
$ cd /path/to/xtuple 
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ node node-datasource/main.js
If the server is already running, just stop it and restart it to reflect your changes. Now if you go to Setup > User Accounts and click the + button, you should see a nice little addition to the form, with a Sip Account divider and three new fields. Nice, eh?

Extend Parameters

Currently you can search your User Accounts list using any of the User Account fields. It would be nice to be able to search with the Sip account fields we added as well. To do that, let's create the directory /path/to/xtuple-extensions/source/sip_account/client/widgets and there create the file parameter.js to extend XV.UserAccountListParameters. Once again, you'll have to look this up. In the xTuple code you'll find the application's parameter.js in /path/to/xtuple/enyo-client/application/source/widgets. Search for the business object you are extending (for example, XV.UserAccount) and look for some combination of the business object name and "Parameters". If there's more than one, try different ones. Not a very refined method, but it worked for me. Here's the content of our parameter.js:
XT.extensions.sip_account.initParameterWidget = function () {
  var extensions = [
    {kind: "onyx.GroupboxHeader", content: "_sipAccount".loc()},
    {name: "uri", label: "_uri".loc(), attr: "uri", defaultKind: "XV.InputWidget"},
    {name: "displayName", label: "_displayName".loc(), attr: "displayName", defaultKind: "XV.InputWidget"}
  ];
  XV.appendExtension("XV.UserAccountListParameters", extensions);
};
Note that I didn't include a search field for the password attribute, for obvious reasons. Now, once again, we package this new code addition by creating a /path/to/xtuple-extensions/source/sip_account/client/widgets/package.js file:
enyo.depends(
"parameter.js"
);
We also have to update /path/to/xtuple-extensions/source/sip_account/client/package.js:
enyo.depends(
"core.js",
"widgets",
"views"
);
Rebuild the extension (and restart the server) and go to Setup > User Accounts. Press the magnifying glass button on the upper left side of the screen and you'll see many options for filtering the User Accounts, among them the SIP Uri and Display Name.

Extend List View

You might want your new fields to show up on the list of User Accounts. There's a bit of an issue here, because unlike what we did in workspace.js and parameter.js, we can't append new things to the list of UserAccounts with the function XV.appendExtension(args). First I tried overwriting the original UserAccountList, which works, but it's far from ideal as this could result in a loss of data from the core implementation. After some discussion with the xTuple dev community, now there's a better alternative: create the file /path/to/xtuple-extensions/source/sip_account/client/views/list.js and add the following:
1 var oldUserAccountListCreate = XV.UserAccountList.prototype.create;
2 
3 XV.UserAccountList.prototype.create = function () {
4   oldUserAccountListCreate.apply(this, arguments);
5   this.createComponent(
6     {kind: "XV.ListColumn", container: this.$.listItem, components: [
7       {kind: "XV.ListAttr", attr: "uri"}
8     ]})
9 };
To understand what I'm doing, check out the XV.UserAccountList implementation in /path/to/xtuple/enyo-client/application/source/views/list.js - the entire highlighted part. What we are doing is extending XV.UserAccountList through "prototype chaining"; this is how inheritance works with Enyo. In line 1 we keep a reference to the original create function, and in line 4 we invoke it so we inherit its behaviour, including the original components array the list is based on. We then create an additional component imitating the setup shown in XV.UserAccountList: an XV.ListColumn containing an XV.ListAttr, which should be placed in the XV.ListItem components array as is done with the existing columns (refer to the implementation). A framework-free sketch of this wrap-the-original-method pattern appears at the end of this post.

Components can or should (?) have names, which are used to access said components. You'd refer to a specific component by the this.$.componentName hash. The components in XV.UserAccountList don't have names, so Enyo automatically names them (apparently) based on the kind name; for example, something of the kind ListItem is named listItem. I found this at random after a lot of trial and error, and it's not a bulletproof solution. It can be bettered. It's strange, because if you encapsulate that code with
XT.extensions.sip_account.initList = function () {
  //Code here
};
as is done with parameter.js and workspace.js (and in the xTuple tutorial you are supposed to do that with a new business object), it doesn't work. I have no idea why. This might be wrong or against xTuple coding norms; I will find out and update this post ASAP. But it does work this way. *shrugs* That said, as we've created the list.js file, we need to add it to our package by editing /path/to/xtuple-extensions/source/sip_account/client/views/package.js:
enyo.depends(
"list.js",
"workspace.js"
);
That's all. Rebuild the app and restart your server, and when you select Setup > User Accounts in the web app you should see the Sip URI displayed on the User Accounts that have Sip Account data. Add a new User Account to try this out.
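As promised above, here is a minimal, framework-free sketch of the wrap-the-original-method pattern that list.js uses: keep a reference to the original function, replace it, and call the original via apply() before adding your own behaviour. The Widget object below is made up purely for illustration and has nothing to do with xTuple or Enyo:
// Plain-JavaScript illustration of the pattern used in list.js above.
function Widget() {
  this.components = [];
}
Widget.prototype.create = function () {
  this.components.push("original column");
};

// Keep a reference to the original method, then override it.
var oldCreate = Widget.prototype.create;
Widget.prototype.create = function () {
  oldCreate.apply(this, arguments);     // run the original create first
  this.components.push("extra column"); // then append our addition
};

var w = new Widget();
w.create();
console.log(w.components); // ["original column", "extra column"]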

22 June 2014

Andreas Barth: Summer in the city ...
